62 research outputs found

    Explaining Scientific Collaboration: a General Functional Account

    For two centuries, collaborative research has become increasingly widespread. Various explanations of this trend have been proposed. Here, we offer a novel functional explanation of it. It differs from accounts like that of Wray (2002) by the precise socio-epistemic mechanism that grounds the beneficialness of collaboration. Boyer-Kassem and Imbert (2015) show how minor differences in the step-efficiency of collaborative groups can make them much more successful in particular configurations. We investigate this model further, derive robust social patterns concerning the general successfulness of collaborative groups, and argue that these patterns can be used to defend a general functional account.

    Scientific Collaboration: Do Two Heads Need to Be More than Twice Better than One?

    Epistemic accounts of scientific collaboration usually assume that, one way or another, two heads really are more than twice better than one. We show that this hypothesis is unduly strong. We present a deliberately crude model with unfavorable hypotheses. We show that, even then, when the priority rule is applied, large differences in successfulness can emerge from small differences in efficiency, with sometimes increasing marginal returns. We emphasize that success is sensitive to the structure of competing communities. Our results suggest that purely epistemic explanations of the efficiency of collaborations are less plausible, but that a much more powerful socio-epistemic version of them is available.
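    The priority-rule mechanism the abstract describes can be illustrated with a deliberately minimal sketch. This is not the authors' actual model: the two-group setup, the step probabilities, and the number of steps are all illustrative assumptions. Two groups race to complete a fixed number of research steps; under the priority rule, the first to finish takes the whole reward, so a small per-step efficiency edge translates into a much larger share of wins.

```python
import random

def race(p_a, p_b, n_steps=10, trials=20000, seed=0):
    """Toy priority race between two groups.

    Each round, a group completes its next step with its own
    probability; the first group to finish all n_steps takes the
    whole reward (the priority rule). Ties are split evenly.
    Returns group A's estimated share of the reward.
    """
    rng = random.Random(seed)
    wins_a = 0.0
    for _ in range(trials):
        a = b = 0
        while a < n_steps and b < n_steps:
            if rng.random() < p_a:
                a += 1
            if rng.random() < p_b:
                b += 1
        if a >= n_steps and b >= n_steps:
            wins_a += 0.5  # simultaneous finish: split the credit
        elif a >= n_steps:
            wins_a += 1.0
    return wins_a / trials

# A modest 10% edge in step efficiency yields a clearly
# disproportionate share of priority wins.
share = race(0.55, 0.50)
```

    In this toy setting, group A's share of wins exceeds its share of raw step efficiency, which is the kind of amplification the abstract attributes to the priority rule.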

    Simulations, explication, comprĂ©hension : essai d’analyse critique

    In this paper, I investigate the potential explanatory value of computer simulations, which are often said to be suitable tools for the prediction, emulation, or imitation of phenomena, but not for their explanation. Similarly, they are described as providing brute-force (number-crunching) methods that are helpful for investigating the behaviour of physical systems but less so when it comes to understanding them. Rightly or wrongly, simulations thus seem to raise specific problems concerning their potential explanatory value. I try to analyze these problems systematically, using existing theories of explanation as analytical guidelines. I first investigate how simulations fare concerning truth (section 2) and proceed by analyzing whether they satisfy the deductivity and nomicity requirements, which play a central role in Hempel’s model of explanation (section 3). I also study whether simulations are appropriate vehicles for the relevant causal information that a good explanation is expected to provide (section 4). I then analyze how much the informational repleteness and the inferential or computational stodginess of computer simulations are problems for the development of explanatory knowledge and our understanding of phenomena (section 5). I finally analyze how much computer simulations can have a unificatory role, as good explanations arguably do (section 6). Overall, I try to analyze why computer simulations still raise specific problems concerning explanation while seemingly being able to meet the criteria that explanations should fulfil. I suggest that some of these reasons are to be found in the epistemology of explanatory activity, in methodological expectations towards good explanations, and in the specific uses that are made of computer simulations to investigate complex systems, in addition to the fact that computer simulations are not human-sized activities and that, as such, they may fail to provide first-person understanding of phenomena.

    Inferential power, formalisms, and scientific models

    Scientific models need to be investigated if they are to provide valuable information about the systems they represent. Surprisingly, the epistemological question of what enables this investigation has hardly been investigated. Even authors who consider the inferential role of models as central, like Hughes (1997) or Bueno and Colyvan (2011), content themselves with claiming that models contain mathematical resources that provide inferential power. We claim that these notions require further analysis and argue that mathematical formalisms contribute to this inferential role. We characterize formalisms, illustrate how they extend our mathematical resources, and highlight how distinct formalisms offer various inferential affordances.

    Formal verification, scientific code, and the epistemological heterogeneity of computational science

    Various errors can affect scientific code, and detecting them is a central concern within computational science. Could formal verification methods, which are now available tools, be widely adopted to guarantee the general reliability of scientific code? After discussing their benefits and drawbacks, we claim that, absent significant changes regarding features like their user-friendliness and versatility, these methods are unlikely to be adopted throughout computational science beyond certain specific contexts for which they are well-suited. This issue exemplifies the epistemological heterogeneity of computational science: profoundly different practices can be appropriate for meeting the reliability challenge that arises for scientific code.

    Are larger studies always better? Sample size and data pooling effects in research communities

    The persistent pervasiveness of inappropriately small studies in empirical fields is regularly deplored in scientific discussions. It is generally agreed that, taken individually, higher-powered studies are more likely to be truth-conducive. However, are they also beneficial for the wider performance of truth-seeking communities? We study the impact of sample sizes on collective exploration dynamics under ordinary conditions of resource limitation. We find that large collaborative studies, because they decrease diversity, can have detrimental effects in certain realistic circumstances, which we characterize precisely. We show how limited inertia mechanisms may partially solve this pooling dilemma and briefly discuss our findings in terms of editorial policies.
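    The diversity trade-off behind this pooling dilemma can be sketched with a toy model. This is not the paper's model: the fixed budget, the number of candidate hypotheses, and the noise scale are all invented for illustration. A community splits a fixed sampling budget into studies; each study probes one randomly chosen hypothesis with noise that shrinks with study size; the community then adopts the hypothesis with the best estimate. One large pooled study measures very precisely but explores almost nothing.

```python
import random

def explore(study_size, budget=120, n_hyps=30, trials=2000, seed=1):
    """Toy model of a community with a fixed sampling budget.

    The budget is divided into studies of a given size. Each study
    probes one randomly chosen hypothesis and observes its true
    quality plus Gaussian noise whose scale shrinks with the square
    root of study size. The community adopts the hypothesis with the
    best observed estimate. Returns the average true quality of the
    adopted hypothesis over many simulated communities.
    """
    rng = random.Random(seed)
    n_studies = budget // study_size
    total = 0.0
    for _ in range(trials):
        qualities = [rng.random() for _ in range(n_hyps)]
        best_est, adopted = float("-inf"), 0.0
        for _ in range(n_studies):
            h = rng.randrange(n_hyps)
            est = qualities[h] + rng.gauss(0, 0.3 / study_size ** 0.5)
            if est > best_est:
                best_est, adopted = est, qualities[h]
        total += adopted
    return total / trials

# With the same budget, many noisy small studies explore more
# hypotheses than one precise pooled study.
small = explore(study_size=10)   # 12 studies of 10 participants
large = explore(study_size=120)  # 1 pooled study of 120
```

    Under these assumptions the small-study community ends up adopting markedly better hypotheses on average, mirroring the abstract's point that pooling can be detrimental when it suppresses exploration diversity.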

    Introduction

    Introduction to the 2018 SPS conference.

